15381: ARTIFICIAL INTELLIGENCE (FALL 2014) Homework 2: Planning, MDPs, and Reinforcement Learning (Solutions)

Abstract

Let S be a set of disjoint obstacles (simple polygons) in the plane, and let n denote the total number of their edges. Assume a point robot moves in the plane and can “walk” along the edges of the obstacles (that is, we treat the obstacles as open sets). The robot starts at position pstart and must reach position pgoal along a shortest collision-free path. In class, we proved that any shortest path between pstart and pgoal is a polygonal path whose inner vertices are vertices of the obstacles. You may use this result in your answers to the following questions.
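The result above suggests the standard construction: since a shortest path only turns at obstacle vertices, it suffices to build the visibility graph over pstart, pgoal, and all obstacle vertices, weight each visible pair by Euclidean distance, and run Dijkstra. A minimal Python sketch under simplifying assumptions (the proper-crossing test ignores collinear overlaps and segments that pass through a vertex into an obstacle interior, which is tolerable here because grazing along edges is allowed):

```python
import heapq
import math

def orient(a, b, c):
    """Signed area test: >0 if a,b,c turn left, <0 if right, 0 if collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_cross(p, q, a, b):
    """True iff segments pq and ab cross at an interior point of both.
    Touching at an endpoint or running along an edge does not count,
    because the robot may walk on obstacle edges (obstacles are open sets)."""
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, obstacles):
    """True iff segment pq properly crosses no obstacle edge."""
    for poly in obstacles:
        m = len(poly)
        for i in range(m):
            if properly_cross(p, q, poly[i], poly[(i + 1) % m]):
                return False
    return True

def shortest_path_length(p_start, p_goal, obstacles):
    """Dijkstra over the visibility graph of {p_start, p_goal} + obstacle vertices."""
    nodes = [p_start, p_goal] + [v for poly in obstacles for v in poly]
    dist = {p_start: 0.0}
    pq = [(0.0, p_start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == p_goal:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v != u and visible(u, v, obstacles):
                nd = d + math.dist(u, v)
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return math.inf
```

With no obstacles this returns the straight-line distance; with a square blocking the straight segment, the path bends around two of its corners, exactly as the vertex lemma predicts.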


Similar resources

Towards a Unified Theory of State Abstraction for MDPs

State abstraction (or state aggregation) has been extensively studied in the fields of artificial intelligence and operations research. Instead of working in the ground state space, the decision maker can usually find solutions much faster in an abstract state space, treating groups of states as a unit and ignoring irrelevant state information. A number of abstractions have been proposed and studied...
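The core idea can be illustrated on a toy deterministic MDP (the states, transitions, and aggregation map below are illustrative, not taken from the paper): when two ground states have identical dynamics and rewards, merging them into one abstract state preserves the optimal values found by value iteration.

```python
# Toy deterministic MDP: P[s][a] = (next_state, reward).
# States 1 and 2 behave identically, so a model-equivalence
# abstraction may treat them as a single unit.
P = {
    0: {'a': (1, 0.0), 'b': (2, 0.0)},
    1: {'a': (3, 1.0), 'b': (1, 0.0)},
    2: {'a': (3, 1.0), 'b': (2, 0.0)},
    3: {'a': (3, 0.0), 'b': (3, 0.0)},   # absorbing terminal state
}
GAMMA = 0.9

def value_iteration(P, gamma, iters=200):
    """Standard value iteration over a deterministic transition table."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(r + gamma * V[s2] for s2, r in P[s].values()) for s in P}
    return V

# Abstraction map phi groups the equivalent states 1 and 2 together.
phi = {0: 'A0', 1: 'A12', 2: 'A12', 3: 'A3'}
abstract_P = {}
for s, acts in P.items():
    # Any representative works, since aggregated states are model-equivalent.
    abstract_P.setdefault(phi[s], {a: (phi[s2], r) for a, (s2, r) in acts.items()})

V_ground = value_iteration(P, GAMMA)
V_abstract = value_iteration(abstract_P, GAMMA)
```

The abstract MDP has three states instead of four, yet assigns each abstract state the same optimal value its ground members share.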


Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning

Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking...
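The option construct can be sketched concretely (the corridor environment and all names below are illustrative, not from the paper): an option bundles an initiation set, a closed-loop policy over primitive actions, and a termination condition, and executing it yields one multi-step transition.

```python
import random

class Option:
    """An option in the options framework: where it may start,
    what it does while running, and when it stops."""
    def __init__(self, init_set, policy, beta):
        self.init_set = init_set   # states where the option may be invoked
        self.policy = policy       # state -> primitive action
        self.beta = beta           # state -> termination probability

def run_option(s, option, step):
    """Execute an option to termination.
    Returns (final state, accumulated reward, number of primitive steps)."""
    assert s in option.init_set
    total, k = 0.0, 0
    while True:
        a = option.policy(s)
        s, r = step(s, a)
        total += r
        k += 1
        if random.random() < option.beta(s):
            return s, total, k

def step(s, a):
    """A 1-D corridor 0..4 with primitive actions -1/+1; reward 1 at the end."""
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 else 0.0)

# "Go right until the wall": terminates with probability 1 at state 4.
go_right = Option(init_set=set(range(5)),
                  policy=lambda s: +1,
                  beta=lambda s: 1.0 if s == 4 else 0.0)
```

Invoked from state 0, `go_right` runs four primitive steps and returns the terminal transition in one call, which is exactly the semi-MDP view of an extended action.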


Feature Reinforcement Learning: Part I. Unstructured MDPs

General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite state Markov decision processes (MDPs). Up to now, extracting the right state representations out of bare observations, that is, reducing the general agent...


Efficient Abstraction Selection in Reinforcement Learning

This paper introduces a novel approach for abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), for which a state is described via a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built up from a different combination of state components...




Publication date: 2014